JIDT: An information-theoretic toolkit for studying the dynamics of complex systems
Complex systems are increasingly being viewed as distributed information
processing systems, particularly in the domains of computational neuroscience,
bioinformatics and Artificial Life. This trend has resulted in a strong uptake
in the use of (Shannon) information-theoretic measures to analyse the dynamics
of complex systems in these fields. We introduce the Java Information Dynamics
Toolkit (JIDT): a Google Code project providing a standalone, open-source
(GNU GPL v3 licensed) implementation for empirical estimation of
information-theoretic measures from time-series data. While the toolkit
provides classic information-theoretic measures (e.g. entropy, mutual
information, conditional mutual information), it ultimately focusses on
implementing higher-level measures for information dynamics. That is, JIDT
focusses on quantifying information storage, transfer and modification, and the
dynamics of these operations in space and time. For this purpose, it includes
implementations of the transfer entropy and active information storage, their
multivariate extensions and local or pointwise variants. JIDT provides
implementations for both discrete and continuous-valued data for each measure,
including various types of estimator for continuous data (e.g. Gaussian,
box-kernel and Kraskov-Stoegbauer-Grassberger) which can be swapped at run-time
due to Java's object-oriented polymorphism. Furthermore, while written in Java,
the toolkit can be used directly in MATLAB, GNU Octave, Python and other
environments. We present the principles behind the code design, and provide
several examples to guide users. Comment: 37 pages, 4 figures.
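The run-time swapping of estimators mentioned above rests on declaring calculators through a common interface. A minimal sketch of that pattern (class and method names as given in the JIDT documentation; the autoregressive test data and parameter choices are illustrative assumptions only):

    // Estimate transfer entropy with the KSG estimator; any class implementing
    // TransferEntropyCalculator (e.g. a Gaussian or box-kernel estimator) can
    // be substituted on the marked line without changing the rest of the code.
    import infodynamics.measures.continuous.TransferEntropyCalculator;
    import infodynamics.measures.continuous.kraskov.TransferEntropyCalculatorKraskov;
    import java.util.Random;

    public class TeDemo {
        public static void main(String[] args) throws Exception {
            Random rng = new Random(0);
            double[] source = new double[1000];
            double[] dest = new double[1000];
            for (int t = 1; t < 1000; t++) {
                source[t] = rng.nextGaussian();
                // dest is driven by the previous source value, so TE > 0 is expected
                dest[t] = 0.5 * source[t - 1] + 0.5 * rng.nextGaussian();
            }
            TransferEntropyCalculator te = new TransferEntropyCalculatorKraskov(); // swap here
            te.initialise(1); // destination history length k = 1
            te.setObservations(source, dest);
            System.out.println("TE(source -> dest) = "
                    + te.computeAverageLocalOfObservations() + " nats");
        }
    }

Declaring the calculator through the interface type is what lets a different estimator be dropped in at run-time without touching the surrounding code.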
Bits from Biology for Computational Intelligence
Computational intelligence is broadly defined as biologically-inspired
computing. Usually, inspiration is drawn from neural systems. This article
shows how to analyze neural systems using information theory to obtain
constraints that help identify the algorithms run by such systems and the
information they represent. Algorithms and representations identified
information-theoretically may then guide the design of biologically inspired
computing systems (BICS). The material covered includes the necessary
introduction to information theory and the estimation of information theoretic
quantities from neural data. We then show how to analyze the information
encoded in a system about its environment, and also discuss recent
methodological developments on the question of how much information each agent
carries about the environment uniquely, redundantly, or synergistically
together with others. Last, we introduce the framework of local
information dynamics, where information processing is decomposed into component
processes of information storage, transfer, and modification -- locally in
space and time. We close by discussing example applications of these measures
to neural data and other complex systems.
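As a concrete instance of the estimation step discussed above, the simplest approach for discrete (e.g. binned spike-count) data is the plug-in estimator, which reads the mutual information directly off the empirical joint distribution. A self-contained sketch (the toy data is an illustrative assumption; note that plug-in estimates are biased for small samples, which is part of why estimation from neural data needs care):

    // Plug-in estimate of I(X;Y) = sum_{x,y} p(x,y) log2( p(x,y) / (p(x)p(y)) ).
    public class PluginMi {
        static double mutualInformation(int[] x, int[] y, int xStates, int yStates) {
            int n = x.length;
            double[][] joint = new double[xStates][yStates];
            double[] px = new double[xStates];
            double[] py = new double[yStates];
            for (int i = 0; i < n; i++) {
                joint[x[i]][y[i]] += 1.0 / n;
                px[x[i]] += 1.0 / n;
                py[y[i]] += 1.0 / n;
            }
            double mi = 0.0;
            for (int a = 0; a < xStates; a++)
                for (int b = 0; b < yStates; b++)
                    if (joint[a][b] > 0)
                        mi += joint[a][b]
                                * Math.log(joint[a][b] / (px[a] * py[b])) / Math.log(2);
            return mi; // in bits
        }

        public static void main(String[] args) {
            int[] x = {0, 1, 0, 1, 0, 1, 0, 1};
            int[] y = {0, 1, 0, 1, 1, 0, 0, 1}; // agrees with x on 6 of 8 samples
            System.out.println("I(X;Y) ~ " + mutualInformation(x, y, 2, 2) + " bits");
        }
    }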
Neuroevolution on the Edge of Chaos
Echo state networks are a special type of recurrent neural network. Recent
papers have stated that echo state networks maximize their computational
performance at the transition between order and chaos, the so-called edge of
chaos. This work confirms this statement in a comprehensive set of experiments.
Furthermore, the echo state networks are compared to networks evolved via
neuroevolution. The evolved networks outperform the echo state networks;
however, the evolution consumes significant computational resources. It is
demonstrated that echo state networks with local connections combine the best
of both worlds: the simplicity of random echo state networks and the
performance of evolved networks. Finally, it is shown that evolution tends to
stay close to the ordered side of the edge of chaos. Comment: To appear in
Proceedings of the Genetic and Evolutionary Computation Conference 2017 (GECCO '17).
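For orientation, the dynamics under study is the standard reservoir update x(t+1) = tanh(W x(t) + w_in u(t)), and the edge of chaos is approached as the spectral radius of W nears 1. A minimal sketch (reservoir size, input weight and target radius are illustrative assumptions; the power-iteration estimate of the spectral radius is a rough stand-in for a proper eigenvalue routine):

    import java.util.Random;

    public class EsnSketch {
        public static void main(String[] args) {
            int n = 50;
            Random rng = new Random(42);
            double[][] w = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    w[i][j] = rng.nextGaussian() / Math.sqrt(n);
            // Rescale W so its spectral radius sits just inside the ordered regime.
            double scale = 0.95 / spectralRadius(w, 200);
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    w[i][j] *= scale;
            // Drive the reservoir with random scalar input.
            double[] x = new double[n];
            for (int t = 0; t < 20; t++) {
                double u = rng.nextGaussian();
                double[] next = new double[n];
                for (int i = 0; i < n; i++) {
                    double s = 0.1 * u; // input weight, illustrative
                    for (int j = 0; j < n; j++) s += w[i][j] * x[j];
                    next[i] = Math.tanh(s);
                }
                x = next;
            }
            System.out.println("x[0] after 20 steps: " + x[0]);
        }

        // Rough power-iteration estimate of |largest eigenvalue|; it may
        // oscillate for non-symmetric W, so real code should prefer a
        // library eigensolver.
        static double spectralRadius(double[][] w, int iters) {
            int n = w.length;
            double[] v = new double[n];
            java.util.Arrays.fill(v, 1.0 / Math.sqrt(n));
            double norm = 1.0;
            for (int k = 0; k < iters; k++) {
                double[] wv = new double[n];
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        wv[i] += w[i][j] * v[j];
                norm = 0.0;
                for (int i = 0; i < n; i++) norm += wv[i] * wv[i];
                norm = Math.sqrt(norm);
                for (int i = 0; i < n; i++) v[i] = wv[i] / norm;
            }
            return norm;
        }
    }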
Inferring effective computational connectivity using incrementally conditioned multivariate transfer entropy
Generalised Measures of Multivariate Information Content
The entropy of a pair of random variables is commonly depicted using a Venn
diagram. This representation is potentially misleading, however, since the
multivariate mutual information can be negative. This paper presents new
measures of multivariate information content that can be accurately depicted
using Venn diagrams for any number of random variables. These measures
complement the existing measures of multivariate mutual information and are
constructed by considering the algebraic structure of information sharing. It
is shown that the distinct ways in which a set of marginal observers can share
their information with a non-observing third party correspond to the elements
of a free distributive lattice. The redundancy lattice from partial information
decomposition is then independently derived by combining the algebraic
structures of joint and shared information content. Comment: 31 pages, 11 figures.
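The negativity that makes the naive Venn picture misleading is easy to exhibit: for X and Y independent uniform bits with Z = X xor Y, the three-way mutual information (co-information) equals -1 bit. A short check via the inclusion-exclusion expression over entropies:

    // I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y) - H(X,Z) - H(Y,Z) + H(X,Y,Z)
    public class XorCoInfo {
        // Shannon entropy, in bits, of a probability distribution.
        static double h(double... p) {
            double s = 0;
            for (double q : p) if (q > 0) s -= q * Math.log(q) / Math.log(2);
            return s;
        }

        public static void main(String[] args) {
            // X, Y uniform bits, Z = X xor Y: four equiprobable (x, y, z) triples.
            double hx = h(0.5, 0.5), hy = hx, hz = hx;  // 1 bit each
            double hxy = h(0.25, 0.25, 0.25, 0.25);     // 2 bits; each pair is uniform
            double hxz = hxy, hyz = hxy;
            double hxyz = h(0.25, 0.25, 0.25, 0.25);    // 2 bits; z is determined by x, y
            double coInfo = hx + hy + hz - hxy - hxz - hyz + hxyz;
            System.out.println("I(X;Y;Z) = " + coInfo + " bits"); // prints -1.0
        }
    }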
Pointwise Partial Information Decomposition using the Specificity and Ambiguity Lattices
What are the distinct ways in which a set of predictor variables can provide
information about a target variable? When does a variable provide unique
information, when do variables share redundant information, and when do
variables combine synergistically to provide complementary information? The
redundancy lattice from the partial information decomposition of Williams and
Beer provided a promising glimpse at the answer to these questions. However,
this structure was constructed using a much criticised measure of redundant
information, and despite sustained research, no completely satisfactory
replacement measure has been proposed. In this paper, we take a different
approach, applying the axiomatic derivation of the redundancy lattice to a
single realisation from a set of discrete variables. To overcome the difficulty
associated with signed pointwise mutual information, we apply this
decomposition separately to the unsigned entropic components of pointwise
mutual information which we refer to as the specificity and ambiguity. This
yields a separate redundancy lattice for each component. Then, based upon an
operational interpretation of redundancy, we define measures of redundant
specificity and ambiguity enabling us to evaluate the partial information atoms
in each lattice. These atoms can be recombined to yield the sought-after
multivariate information decomposition. We apply this framework to canonical
examples from the literature and discuss the results and the various properties
of the decomposition. In particular, the pointwise decomposition using
specificity and ambiguity satisfies a chain rule over target variables, which
provides new insights into the so-called two-bit-copy example. Comment: 31
pages, 10 figures. (v1: preprint; v2: as accepted; v3: title corrected)
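Numerically the split is straightforward: the signed pointwise mutual information i(s;t) = log2[p(s|t)/p(s)] separates as i(s;t) = h(s) - h(s|t), with specificity h(s) = -log2 p(s) and ambiguity h(s|t) = -log2 p(s|t), each non-negative. A minimal sketch (the probabilities are illustrative assumptions):

    public class SpecAmb {
        public static void main(String[] args) {
            double ps = 0.25;      // p(s): prior probability of the source event
            double psGivenT = 0.5; // p(s|t): posterior given the target event
            double specificity = -Math.log(ps) / Math.log(2);      // 2 bits
            double ambiguity = -Math.log(psGivenT) / Math.log(2);  // 1 bit
            double pmi = specificity - ambiguity;                  // +1 bit
            System.out.println("h(s) = " + specificity + ", h(s|t) = " + ambiguity
                    + ", i(s;t) = " + pmi + " bits");
            // i(s;t) is negative whenever p(s|t) < p(s) (a misinformative
            // realisation), but the two components are individually unsigned,
            // which is what allows a redundancy lattice to be built over each.
        }
    }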
Inferring network properties from time series using transfer entropy and mutual information: validation of multivariate versus bivariate approaches
Functional and effective networks inferred from time series are at the core
of network neuroscience. Interpreting their properties requires inferred
network models to reflect key underlying structural features; however, even a
few spurious links can distort network measures, challenging the
interpretation of functional connectomes. We study the extent to which micro-
and macroscopic properties of
underlying networks can be inferred by algorithms based on mutual information
and bivariate/multivariate transfer entropy. The validation is performed on two
macaque connectomes and on synthetic networks with various topologies (regular
lattice, small-world, random, scale-free, modular). Simulations are based on a
neural mass model and on autoregressive dynamics (employing Gaussian estimators
for direct comparison to functional connectivity and Granger causality). We
find that multivariate transfer entropy captures key properties of all networks
for longer time series. Bivariate methods can achieve higher recall
(sensitivity) for shorter time series but are unable to control false positives
(lower specificity) as available data increases. This leads to overestimated
clustering, small-world, and rich-club coefficients, underestimated shortest
path lengths and hub centrality, and fattened degree distribution tails.
Caution should therefore be used when interpreting network properties of
functional connectomes obtained via correlation or pairwise statistical
dependence measures, rather than more holistic (yet data-hungry) multivariate
models.
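The parenthetical equivalence above can be made concrete: for linear-Gaussian dynamics, the Gaussian-estimator transfer entropy coincides with Granger causality, i.e. TE(Y -> X) = 0.5 ln(var_restricted / var_full), comparing prediction of x(t) from x(t-1) alone against x(t-1) and y(t-1) together. A self-contained sketch with order-1 histories (the coupling strengths and history length are illustrative assumptions):

    import java.util.Random;

    public class GaussianTe {
        public static void main(String[] args) {
            int n = 10000;
            Random rng = new Random(1);
            double[] x = new double[n], y = new double[n];
            for (int t = 1; t < n; t++) {
                y[t] = 0.5 * y[t - 1] + rng.nextGaussian();
                x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.nextGaussian();
            }
            // Sufficient statistics for least-squares fits (zero-mean data assumed).
            double sxx = 0, sxp = 0, syy = 0, sxy = 0, syp = 0;
            for (int t = 1; t < n; t++) {
                sxx += x[t - 1] * x[t - 1];
                sxp += x[t] * x[t - 1];
                syy += y[t - 1] * y[t - 1];
                sxy += x[t - 1] * y[t - 1];
                syp += x[t] * y[t - 1];
            }
            // Restricted model: x(t) ~ a * x(t-1).
            double aR = sxp / sxx;
            double varR = 0;
            for (int t = 1; t < n; t++) {
                double e = x[t] - aR * x[t - 1];
                varR += e * e;
            }
            // Full model: x(t) ~ a * x(t-1) + b * y(t-1); 2x2 normal equations.
            double det = sxx * syy - sxy * sxy;
            double aF = (syy * sxp - sxy * syp) / det;
            double bF = (sxx * syp - sxy * sxp) / det;
            double varF = 0;
            for (int t = 1; t < n; t++) {
                double e = x[t] - aF * x[t - 1] - bF * y[t - 1];
                varF += e * e;
            }
            System.out.println("TE(Y -> X) ~ " + 0.5 * Math.log(varR / varF) + " nats");
        }
    }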
Probability Mass Exclusions and the Directed Components of Pointwise Mutual Information
This paper examines how an event from one random variable provides pointwise
mutual information about an event from another variable via probability mass
exclusions. We start by introducing probability mass diagrams, which provide a
visual representation of how a prior distribution is transformed to a posterior
distribution through exclusions. With the aid of these diagrams, we identify
two distinct types of probability mass exclusions---namely informative and
misinformative exclusions. Then, motivated by Fano's derivation of the
pointwise mutual information, we propose four postulates which aim to decompose
the pointwise mutual information into two separate informational components: a
non-negative term associated with the informative exclusion and a non-positive
term associated with the misinformative exclusions. This yields a novel
derivation of a familiar decomposition of the pointwise mutual information into
entropic components. We conclude by discussing the relevance of considering
information in terms of probability mass exclusions to the ongoing effort to
decompose multivariate information. Comment: 6 pages, 7 figures.
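A small numerical sketch of the exclusions picture (the prior and the excluded states below are illustrative assumptions): observing an event from the other variable rules out probability mass, and renormalising what remains yields the posterior; whether the excluded mass lay on s itself or elsewhere determines the sign of the pointwise mutual information:

    // Mass excluded from events other than s is informative (it raises p(s|t));
    // mass excluded from s itself is misinformative (it lowers p(s|t)).
    public class MassExclusion {
        public static void main(String[] args) {
            double[] prior = {0.4, 0.3, 0.2, 0.1};          // p over states; s is state 0
            boolean[] excluded = {false, true, false, true}; // states ruled out by event t
            double remaining = 0;
            for (int i = 0; i < prior.length; i++)
                if (!excluded[i]) remaining += prior[i];
            double posteriorS = excluded[0] ? 0 : prior[0] / remaining;
            double pmi = Math.log(posteriorS / prior[0]) / Math.log(2);
            // Only non-s mass was excluded here, so the exclusions are purely
            // informative and i(s;t) > 0 (about 0.74 bits for these numbers).
            System.out.println("p(s) = " + prior[0] + ", p(s|t) = " + posteriorS
                    + ", i(s;t) = " + pmi + " bits");
        }
    }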